Introduction
In this storyboard, I will try to convince you of the existence of the genre Electronic Singer-Songwriters. Acoustic and electronic music are worlds apart: the music sounds very different, which is inherently caused by the use of different instruments.
When asked ‘What is a singer-songwriter?’, many people will think of a person who just plays guitar and sings. This would be an acoustic singer-songwriter.
I define acoustic singer-songwriters as one person who sings and plays exclusively one acoustic instrument, mainly guitar or piano. Examples are Passenger and Ed Sheeran. In most acoustic singer-songwriter songs, a little percussion is included, for example a cajón.
I argue that electronic singer-songwriters exist as well.
I define electronic singer-songwriters as people who produce their own music, containing (not necessarily, but also not limited to) synthesizers, loop stations, or drum computers, and who are able to perform it live. The music must contain roughly the same amount of singing as the acoustic songs. Examples are Chet Faker and JAIN.
The question that will be examined in this report is: does there exist a genre ‘Electronic Singer-Songwriters’ that is different from ‘Acoustic Singer-Songwriters’?
This implies that there have to be noticeable differences between the two genres, but also similarities that represent singer-songwriters.
In this investigation, two corpora consisting of 227 songs per genre will be compared. For more information about the playlists, see the ‘Corpora’ section.
First, the two corpora will be compared as a whole through a classification and a clustering of all the songs. Then the track-level features, such as danceability and acousticness, will be compared between the corpora. For each corpus, a representative song will be chosen and subjected to chromagram, self-similarity-matrix, keygram, and tempogram analyses. Lastly, the two corpora will be compared on the Spotify timbre coefficients.
Corpora
Two corpora, both Spotify playlists, will be compared with each other. For the acoustic singer-songwriters, a playlist generated by Spotify called ‘Acoustic Singer-Songwriters’ was chosen. The playlist was checked manually, and all songs that did not satisfy the criteria stated above were removed. This corpus represents the acoustic singer-songwriters fairly well. There are 227 songs in this playlist.
The playlist for the electronic singer-songwriters was made by a Spotify user and extended by me. There are 227 songs in this playlist. It contains many songs that fulfil the criteria above, but also some songs that belong to the indie genre, with lots of electric guitars but no electronic instruments. These songs were filtered out of the playlist manually, as much as possible.
Acoustic Singer-Songwriters:
Accuracy: 77.25
Precision: 73.81
Electronic Singer-Songwriters:
Accuracy: 70.04
Precision: 76.44
The results of the confusion matrix are promising: most of the songs are classified into the correct genre. The accuracy is not incredibly high, but it provides reasonable evidence that the genres are different.
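As a sketch of how per-class accuracy and precision figures like the ones above can be derived from a 2×2 confusion matrix (the counts below are invented for illustration; they are not the classifier’s actual output):

```python
import numpy as np

# Hypothetical confusion matrix: rows = true genre, columns = predicted genre.
# Order: [acoustic, electronic]. Counts are invented for illustration only.
cm = np.array([[175,  52],   # true acoustic
               [ 62, 165]])  # true electronic

# Per-class "accuracy" (recall): correctly classified / all songs of that genre.
acc_acoustic = cm[0, 0] / cm[0].sum()
# Per-class precision: correctly classified / all songs predicted as that genre.
prec_acoustic = cm[0, 0] / cm[:, 0].sum()

print(round(100 * acc_acoustic, 2), round(100 * prec_acoustic, 2))
```

The electronic-class figures follow symmetrically from the second row and column.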
The most distinguishing features between the two playlists are:
The mean danceability of the acoustic playlist is 0.56 (SD = 0.11), and the mean danceability of the electronic playlist is 0.62 (SD = 0.16).
The mean speechiness of the acoustic playlist is 0.04 (SD = 0.01), and the mean speechiness of the electronic playlist is 0.07 (SD = 0.07).
The mean acousticness of the acoustic playlist is 0.66 (SD = 0.27), and the mean acousticness of the electronic playlist is 0.42 (SD = 0.29).
To find a representative song per playlist, songs with feature values around these means will be chosen.
Acoustic playlist: ‘Disappears With Time’ by Tangerine (acousticness = 0.664).
Electronic playlist: ‘Maybe You’re The Reason’ by The Japanese House (danceability = 0.621).
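The selection step can be sketched as picking the track whose feature value lies closest to the playlist mean. In the toy table below, only ‘Disappears With Time’ and its acousticness come from the report; the other tracks and values are invented:

```python
# Toy acousticness values per track; the target is the acoustic playlist's
# mean acousticness (0.66, from the report). Other tracks are invented.
acousticness = {
    "Some Sparse Ballad": 0.93,
    "Disappears With Time": 0.664,
    "Some Fuller Track": 0.35,
}
target = 0.66
representative = min(acousticness, key=lambda t: abs(acousticness[t] - target))
print(representative)  # the track nearest the playlist mean
```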
Timbre features
Timbre feature c04 describes songs with a stronger attack. Timbre feature c06 describes songs that start with low energy, presumably an intro, have a bright middle, and a medium end. Timbre feature c08 describes songs that start with bright high frequencies and end with bright low frequencies.
The timbre coefficients will be discussed more thoroughly in the last section of this notebook.
To see whether the playlists would be separated in a clustering task, a new playlist was created with 25 representative acoustic songs and 25 representative electronic songs. The acoustic songs were selected so that their acousticness was around the mean acousticness of the whole acoustic playlist. The electronic songs were selected in the same way, but using danceability instead of acousticness.
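The clustering step can be sketched with scipy’s hierarchical clustering on toy feature vectors. The two groups below are drawn around the playlist means reported earlier; the real input would be the Spotify track-level features of the 50 selected songs:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy (acousticness, danceability) vectors for 5 "acoustic" and 5 "electronic"
# songs, sampled around the playlist means from the report (values synthetic).
rng = np.random.default_rng(1)
acoustic = rng.normal(loc=[0.66, 0.56], scale=0.03, size=(5, 2))
electronic = rng.normal(loc=[0.42, 0.62], scale=0.03, size=(5, 2))
X = np.vstack([acoustic, electronic])

# Average-linkage hierarchical clustering; Z is what a dendrogram plot uses.
Z = linkage(X, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 clusters
```

With clearly separated toy groups, the two-cluster cut recovers the acoustic/electronic split exactly; the report’s real dendrogram is messier, as discussed below.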
Ideally, the dendrogram would show at least two large clusters, one containing all 25 acoustic songs and one containing all 25 electronic songs. This is not the case, but the songs are clustered in groups that consist almost solely of either electronic or acoustic songs. The song ‘Why Don’t You Listen’ is the only song that occurs on its own.
The clustering shown by the dendrogram could mean that the songs clearly belong to different genres and are grouped with their own genre based on their features.
Together with the results of the confusion matrix, this is evidence for the existence of the genre ‘Electronic Singer-Songwriters’.
Note: ‘Instrumentalness’ is represented by the size of the dots.
Mode
This graph shows the noticeable differences between the playlists. The largest difference seems to be the mode: there are 20 minor songs in the acoustic playlist, which is 8.81%.
In the electronic playlist, there are 80 minor songs, which is 35.24%.
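The percentages follow directly from the counts quoted above:

```python
# Share of minor-mode songs per playlist, from the counts above (n = 227 each).
n = 227
pct_minor_acoustic = round(100 * 20 / n, 2)
pct_minor_electronic = round(100 * 80 / n, 2)
print(pct_minor_acoustic, pct_minor_electronic)  # 8.81 35.24
```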
Acousticness
The acoustic songs are more acoustic than the electronic songs, which makes a lot of sense.
Danceability
None of the acoustic songs has a danceability higher than 0.8, while many more of the electronic songs have a danceability higher than or close to 0.8. There seems to be a small trade-off between acousticness and danceability.
Instrumentalness
And lastly, electronic songs seem to be more instrumental than acoustic songs.
The mean instrumentalness of the acoustic playlist is 0.04 (SD = 0.11), and the mean instrumentalness of the electronic playlist is 0.12 (SD = 0.24).
These graphs seem very similar, except for the tempo: the acoustic songs all have roughly the same tempo, while the electronic songs have more variety in tempo and an overall higher tempo.
The mean tempo of the acoustic playlist is 116.56 BPM (SD = 29.97), and the mean tempo of the electronic playlist is 119.89 BPM (SD = 27.65).
This difference is very small, which is why tempo is shown in this graph.
Two differences can be seen between these chromagrams:
In ‘Maybe You’re The Reason’, more pitch classes are used than in ‘Disappears With Time’. This could be explained by the fact that electronic singer-songwriters can use more different instruments, thanks to machines that can play multiple sounds at the same time.
In ‘Disappears With Time’, the blue rectangles are wider than in ‘Maybe You’re The Reason’. This could mean that the notes or chords in ‘Disappears With Time’ are held longer, which could be a clue that the tempo of that song is lower. However, the tempo of ‘Disappears With Time’ is 127 BPM and the tempo of ‘Maybe You’re The Reason’ is 95 BPM, so this is not the case. Another explanation could be that there are more chords in ‘Disappears With Time’ and that ‘Maybe You’re The Reason’ has more notes played in succession instead of together. When listening to these songs, this seems to be the case.
This single example provides only weak evidence that electronic songs use more successive notes and acoustic songs use more chords.
Recall that one of the distinctive differences found between the playlists was pitch class C. ‘Maybe You’re The Reason’ has more energy in pitch class C than ‘Disappears With Time’. This could very well be accidental, but it is worth showing. It could be the case that electronic songs are relatively simpler in their use of keys and chords, since the variation in sounds can compensate for this. This could result in more songs being in the key of C, the most ‘basic’ key. Acoustic musicians are perhaps pushed to write with more variation in key, since they can only use one instrument and their voice. Investigating this falls outside the scope of this course, but the use of keys can be examined a little further: a graph displaying the keys used per playlist is shown on the next tab.
C is the most used key in the electronic playlist, but it is also the most used key in the acoustic playlist. It is true that C is used more in electronic songs than in acoustic songs, but the difference is not big.
27 of the in total 227 acoustic songs are in C, which is a percentage of 12%.
34 of the in total 227 electronic songs are in C, which is a percentage of 15%.
This difference seems too small to draw conclusions from.
When looking at the chroma self-similarity matrices, it seems that ‘Disappears With Time’ is more structured, in the sense that different sections alternate regularly: the SSM shows a checkerboard pattern. In the song, there is a riff that is repeated very often, and the rest of the song is also very repetitive.
‘Maybe You’re The Reason’ is less structured, and this is visible in the chromagram as well. Especially in the verses, not much music is played; there is mostly just singing. There is an intro riff that is repeated in the chorus, but the chorus contains a lot of other instruments. The verses and choruses are not always the same. This explains the chroma-based SSM.
Based on this example, it could carefully be posited that acoustic songs are more repetitive and structured, but to strengthen this hypothesis, the timbre-based SSMs have to be compared, which is done on the next page.
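A chroma-based SSM is simply the cosine similarity between all pairs of chroma frames. The toy chromagram below (synthetic, not from either song) alternates two triads every 10 frames, which produces exactly the kind of checkerboard described above; real chroma frames would come from the Spotify audio analysis or an audio library:

```python
import numpy as np

# Toy chromagram: 12 pitch classes x 40 frames, alternating a C major triad
# (pitch classes 0, 4, 7) and a D minor triad (2, 5, 9) every 10 frames.
frames = 40
chroma = np.zeros((12, frames))
for i in range(frames):
    triad = [0, 4, 7] if (i // 10) % 2 == 0 else [2, 5, 9]
    chroma[triad, i] = 1.0

# Normalise each frame, then take cosine similarity between all frame pairs.
unit = chroma / np.linalg.norm(chroma, axis=0, keepdims=True)
ssm = unit.T @ unit  # ssm[i, j] = similarity between frames i and j
```

Frames playing the same triad get similarity 1, frames playing different triads get 0, giving alternating bright and dark blocks.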
The timbre-based SSM of ‘Disappears With Time’ is less bright than the chromagram, but equally structured. The song has roughly the same timbre during the whole time; there is not much variation in sound.
‘Maybe You’re The Reason’ has a more structured timbre-based than chroma-based SSM. The timbre of the verses and choruses is clearly different and separable, so ‘Maybe You’re The Reason’ has more structured timbre variation.
This difference can partly be explained by the possibility for electronic songs to use more instruments, since it is easier to create variation with more instruments. Nevertheless, the hypothesis still stands:
Acoustic songs seem to have more repetition in their chroma’s and timbre and electronic songs seem to have more variation.
The difference in tempograms is very clear: ‘Maybe You’re The Reason’ keeps a very steady tempo, while ‘Disappears With Time’ shows a shakier line. This is easily explained: ‘Maybe You’re The Reason’ was probably produced with a computer beat that maintains exactly the same tempo throughout the song, while ‘Disappears With Time’ was probably recorded while someone was playing the guitar live.
It can be concluded that electronic songs, provided that the drums were created with a drum computer, have a steadier tempo than acoustic songs.
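The steadiness claim can be illustrated with an autocorrelation-style tempo estimate, the idea behind one column of a tempogram. The onset-strength envelope below is synthetic (a metronomic beat plus noise); a machine-perfect beat yields one sharp autocorrelation peak, i.e. a flat tempogram line, while a live performance smears that peak over time:

```python
import numpy as np

# Synthetic onset-strength envelope at 100 frames/second with a metronomic
# beat every 50 frames (i.e. 120 BPM), plus a little noise.
rng = np.random.default_rng(0)
frame_rate = 100.0
onset = np.zeros(2000)
onset[::50] = 1.0
onset += 0.05 * rng.random(onset.size)

# One tempogram column: autocorrelation of the envelope peaks at the beat lag.
centered = onset - onset.mean()
ac = np.correlate(centered, centered, mode="full")[onset.size - 1:]
lag = int(np.argmax(ac[20:200])) + 20   # search lags covering roughly 30-300 BPM
bpm = 60.0 * frame_rate / lag
```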
To return to the timbre coefficients:
The timbre coefficients that were the most different between the playlists, according to the classifier, were coefficients c04, c06, and c08.
The timbre-coefficients graph is in accordance with this: the biggest differences are in these three coefficients. The graph shows how these differences are expressed.
The electronic playlist has more of timbre features c04 and c08. This would mean that electronic songs tend to have a stronger attack, and to start with instruments that have higher frequencies and end with sounds that have lower frequencies, like bass.
The acoustic playlist has more of timbre feature c06. This would mean that acoustic songs tend to start low-energy, presumably a soft intro, build up in the middle part and end with less energy, presumably an outro.
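Finding the most discriminative coefficients can be sketched as ranking the absolute difference of the per-playlist means. The twelve mean values below are invented; only the outcome that c04, c06, and c08 stand out mirrors the report:

```python
import numpy as np

# Hypothetical mean Spotify timbre coefficients c01..c12 per playlist.
acoustic_mean   = np.array([50.0, 10.0, 5.0, 2.0, 1.0, 4.0, 0.5, 1.0,
                            0.2, 0.1, 0.0, 0.1])
electronic_mean = np.array([52.0, 11.0, 5.0, 4.6, 1.0, 1.5, 0.5, 3.2,
                            0.2, 0.1, 0.0, 0.1])

# Rank coefficients by how far the playlist means are apart.
diff = electronic_mean - acoustic_mean
top3 = np.argsort(np.abs(diff))[::-1][:3] + 1  # 1-based coefficient numbers
print(sorted(int(i) for i in top3))  # the three most different coefficients
```

The sign of `diff` then tells which playlist has more of each coefficient, as described above.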
We have compared the two corpora in many ways and found both similarities and differences.
The classifier that was built could assign around 70% of the songs to the right genre, but 30% of the songs were classified wrongly. This means that there is similarity between the corpora up to a certain degree, but that they are mainly different. The dendrogram shows the same result.
The main similarities found were that both genres have a low speechiness, an average tempo, an average loudness, and largely the same Spotify timbre coefficients. It could be argued that these similarities represent singer-songwriters in general, but that has to be investigated further, since it was not the main goal of this research.
Electronic songs tend to be more moody (they have more minor songs) than acoustic ones, have more instrumental breaks and are more danceable. They also have more variation in pitch classes and a steadier tempo. With regards to the timbre, electronic songs have a stronger attack.
Acoustic songs tend to have more structure, both in their pitch classes and in their timbre. They also tend to have soft intros and outros more often.
Based on these conclusions, we can say with some confidence that electronic singer-songwriters make music in a genre that is different from but comparable with the music that acoustic singer-songwriters make.
It should be noted that a large part of the comparison was performed on two songs chosen as representative of the corpora; to draw a stronger conclusion, more songs would have to be compared.
Link to the playlists:
Acoustic playlist: https://open.spotify.com/user/11122498650/playlist/0a2aX4LlpTkeYLuTrVUu88?si=Nrjnwh-gRqSFDZyBX9zjMg
Electronic playlist: https://open.spotify.com/user/11122498650/playlist/2cANvNRpfzoZ6rkoaJQg25?si=0Uive1_3RFuKjGakTXaKyQ